Do machines dream of atoms? Crippen's logP as a quantitative molecular benchmark for explainable AI heatmaps

Authors

Abstract

While there is a great deal of interest in methods aimed at explaining machine learning predictions of chemical properties, it is difficult to quantitatively benchmark such methods, especially for regression tasks. We show that the Crippen logP model (J. Chem. Inf. Comput. Sci. 1999, 39, 868) provides an excellent benchmark for atomic attribution/heatmap approaches, especially if the ground-truth heatmaps can be adjusted to reflect the molecular representation. The "atom attribution from fingerprints" method developed by Riniker and Landrum (J. Cheminform. 2013, 5, 43) gives reasonable agreement with the Crippen atomic contributions for most molecules, with average heatmap overlaps of up to 0.54. The agreement increases significantly (to 0.75) when the atomic contributions are adjusted to reflect the fact that the molecular representation is fragment-based rather than atom-based (the fingerprint-adapted (FPA) contributions). The FPA heatmap overlaps are relatively insensitive to training set size, and the results are close to converged at 1000 molecules, although molecules with low overlap can change significantly. Using the "remove atom" approach for graph convolutional neural networks (GCNNs) suggested by Matveieva and Polishchuk (J. Cheminform. 2021, 13, 41), we find an average heatmap overlap of 0.47 for the GCNN model. Like the simpler benchmarks for classification tasks that have come before it, this work sets the bar for quantitative benchmarking of explainable AI methods for regression tasks.


Similar articles

Transparent, explainable, and accountable AI for robotics

Recent governmental statements from the United States (USA) (1, 2), the European Union (EU) (3), and China (4) identify artificial intelligence (AI) and robotics as economic and policy priorities. Despite this enthusiasm, challenges remain. Systems can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive enviro...


Explainable Restricted Boltzmann Machines for Collaborative Filtering

Most accurate recommender systems are black-box models, hiding the reasoning behind their recommendations. Yet explanations have been shown to increase the user’s trust in the system in addition to providing other benefits such as scrutability, meaning the ability to verify the validity of recommendations. This gap between accuracy and transparency or explainability has generated an interest in...


Do the Self-Knowing Machines Dream of Knowing Their Factivity?

The Gödelian Arguments represent the effort made to interpret Gödel's Incompleteness Theorems in order to show that minds cannot be explained in purely mechanist terms. With the purpose of proving the limits of mechanistic theses and investigating aspects of the Church-Turing Thesis, several results obtained in the formal setting of Epistemic Arithmetic (EA) reveal the relation among different pro...


What do we need to build explainable AI systems for the medical domain?

Artificial intelligence (AI) generally and machine learning (ML) specifically demonstrate impressive practical success in many different application domains, e.g. in autonomous driving, speech recognition, or recommender systems. Deep learning approaches, trained on extremely large data sets or using reinforcement learning methods have even exceeded human performance in visual tasks, particular...


Scheduling balanced task-graphs to LogP-machines

This article discusses algorithms for scheduling task graphs G = (V, E) to LogP machines. These algorithms depend on the granularity of G, i.e., on the ratio of the computation and communication times in the LogP cost model, and on the structure of G. We define a class of coarse-grained task graphs that can be scheduled with a performance guarantee relative to Topt(G), where Topt(G) is the time required for the optimal ...



Journal

Journal title: SciPost Chemistry

سال: 2023

ISSN: 2772-6762

DOI: https://doi.org/10.21468/scipostchem.2.1.002